Data sets in the biological domain are often semantically complex and difficult to integrate and visualise. Converting between the file formats required by interactive analysis tools and those used by the global databases is a costly and error-prone process. This paper describes a data model designed to enable efficient rendering of and interaction with biological data, and two demonstrator applications from different fields of protein analysis that provide co-ordinated views of data held in the underlying model.
Motivation: In the biological sciences, the need to analyse vast amounts of information has become commonplace. Such large-scale analyses often involve drawing together data from a variety of different databases, held remotely on the internet or locally on in-house servers. Supporting these tasks are ad hoc collections of data-manipulation tools, scripting languages and visualisation software, which are often combined in arcane ways to create cumbersome systems that have been customised for a particular purpose, and are consequently not readily adaptable to other uses. For many day-to-day bioinformatics tasks, the sizes of current databases, and the scale of the analyses necessary, now demand increasing levels of automation; nevertheless, the unique experience and intuition of human researchers is still required to interpret the end results in any meaningful biological way. Putting humans in the loop requires tools to support real-time interaction with these vast and complex data-sets. Numerous tools do exist for this purpose, but many do not have optimal interfaces, most are effectively isolated from other tools and databases owing to incompatible data formats, and many have limited real-time performance when applied to realistically large data-sets: much of the user's cognitive capacity is therefore focused on controlling the software and manipulating esoteric file formats rather than on performing the research. Methods: To confront these issues, harnessing expertise in human-computer interaction (HCI), high-performance rendering and distributed systems, and guided by bioinformaticians and end-user biologists, we are building reusable software components that, together, create a toolkit that is architecturally sound from a computing point of view and addresses both user and developer requirements.
Key to the system's usability is its direct exploitation of semantics, which, crucially, gives individual components knowledge of their own functionality and allows them to interoperate seamlessly, removing many of the existing barriers and bottlenecks from standard bioinformatics tasks. Results: The toolkit, named Utopia, is freely available from http://utopia.cs.man.ac.uk/.
In silico experiments have hitherto required ad hoc collections of scripts and programs to process and visualise biological data, consuming substantial amounts of time and effort to build, and leading to tools that are difficult to use, are architecturally fragile and scale poorly. With examples of the systems applied to real biological problems, we describe two complementary software frameworks that address this problem in a principled manner: myGrid Taverna, a workflow design and enactment environment enabling coherent experiments to be built, and UTOPIA, a flexible visualisation system to aid in examining experimental results.
The authors investigated the extent to which route learning in a virtual environment (VE) transfers to the real world. In Experiment 1, active VE exploration, on its own or with a map, produced better transfer of training than either no VE training at all or passive VE training; however, transfer was achieved after shorter training times with the map. Experiment 2 demonstrated that VE + map training was not superior to training with a map alone, and Experiment 3 demonstrated that the poorer performances observed after passive VE training were not simply due to a lack of attention but to the lack of active navigational decisions. The authors concluded that the present VE technology does not provide better route learning than studying a map.
Three experiments compared the performances of adult participants (three groups of 10) on a perceptuo-motor task in both real world (RW) and virtual environments (VEs). The task involved passing a hoop over a bent wire course, and three versions of the task were used: a 3-D wire course with no background, a flattened version of the 3-D course (2½-D course) with no background, and the 2½-D course with added background to provide spatial context. In all three experiments the participants had to prevent the hoop from touching the wire as they moved it. In the first experiment, the VE condition produced about 18 times more errors than the RW task. The VE 2½-D task was found to be as difficult as the 3-D task, and the 2½-D task with the added background produced more errors than the other two conditions. Taken together, the experiments demonstrate the difficulty of performing fine motor tasks in VEs, a phenomenon that has not been given due attention in many previous studies of motor control in VEs.
As shared virtual environments move beyond mere exemplars of computer graphics techniques, and evolve more meaningful content, issues such as 'ownership' or 'access' become important. Some artefacts in synthetic environments have 'real world' counterparts and require 'real world' access control and security. Others are entirely synthetic, and may require more esoteric access control. In either case, support for access model definition is becoming an important consideration for the future of virtual environments. In this paper we explore the different types of access required by different styles of VE, and develop a novel access model appropriate for these situations.
This paper describes a publicly available virtual reality (VR) system, GNU/MAVERIK, which forms one component of a complete VR operating system. We give an overview of the architecture of MAVERIK, and show how it is designed to use application data in an intelligent way, via a simple, yet powerful, callback mechanism that supports an object-oriented framework of classes, objects, and methods. Examples are given to illustrate different uses of the system and typical performance levels.
In this paper we present work undertaken by the Advanced Interfaces Group at the University of Manchester on the design and development of a system to support large numbers of geographically distributed users in complex, large-scale virtual environments (VEs).
Papers by Steve Pettifer